In [4]:
from lib.ozapfdis.git_tc import log_numstat
GIT_REPO_DIR = "../../dropover_git/"
git_log = log_numstat(GIT_REPO_DIR)[['sha', 'file']]
git_log.head()
Out[4]:
In [5]:
prod_code = git_log.copy()
prod_code = prod_code[prod_code.file.str.endswith(".java")]
prod_code = prod_code[prod_code.file.str.startswith("backend/src/main")]
prod_code = prod_code[~prod_code.file.str.endswith("package-info.java")]
prod_code.head()
Out[5]:
We want to see which files are changing (almost) together. A good start is to create such a view of our dataset with the pivot_table method of the underlying pandas DataFrame.
But before that, we need a marker column that signals that a file was changed in a commit. We can easily create an additional column named hit for this.
In [6]:
prod_code['hit'] = 1
prod_code.head()
Out[6]:
Now we can transform the data as we need it: as the index, we choose the filename; as columns, we choose the unique sha keys of the commits. Together with the commit hits as values, we can now see which file changes occurred in which commit. Note that the pivoting also changes the order of both indexes: they are now sorted alphabetically.
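To illustrate what this pivoting does, here is a minimal sketch on made-up toy data (the file names and shas are invented for illustration only):

import pandas as pd

# toy change log: which (made-up) file was touched in which (made-up) commit
toy = pd.DataFrame({
    'sha':  ['c1', 'c1', 'c2', 'c3', 'c3'],
    'file': ['A.java', 'B.java', 'A.java', 'B.java', 'C.java']})
toy['hit'] = 1

# pivot to a file x commit matrix; missing combinations become 0
toy.pivot_table(index='file', columns='sha', values='hit', fill_value=0)

The real data works exactly the same way, just with many more files and commits.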
In [7]:
commit_matrix = prod_code.reset_index().pivot_table(
    index='file',
    columns='sha',
    values='hit',
    fill_value=0)
commit_matrix.iloc[0:5,50:55]
Out[7]:
As already mentioned in a previous blog post, we can now look at our problem from a mathematician's perspective. What we have with the commit_matrix is a collection of n-dimensional vectors. Each vector represents a filename, and the components/dimensions of such a vector are the commits, with either the value 0 or 1.
Calculating similarities between such vectors is a well-known problem with a variety of solutions. In our case, we calculate the distance between the various vectors with the cosine distance metric. The machine learning library scikit-learn provides us with an easy-to-use implementation.
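As a quick illustration of what the cosine distance does to such binary commit vectors, here is a tiny, made-up example (the two vectors are invented for this sketch only): one file was changed in the first two commits, the other in the last two.

import numpy as np
from sklearn.metrics.pairwise import cosine_distances

# two toy commit vectors over three commits (1 = file was changed in that commit)
file_a = np.array([[1, 1, 0]])
file_b = np.array([[0, 1, 1]])

# 0.0 would mean "always changed together", 1.0 "never changed together"
cosine_distances(file_a, file_b)  # -> array([[0.5]])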
In [9]:
import numpy as np
from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram
from sklearn.cluster import AgglomerativeClustering

def plot_dendrogram(model, **kwargs):
    # Children of hierarchical clustering
    children = model.children_

    # Distances between each pair of children
    # Since we don't have this information, we can use a uniform one for plotting
    distance = np.arange(children.shape[0])

    # The number of observations contained in each cluster level
    no_of_observations = np.arange(2, children.shape[0] + 2)

    # Create linkage matrix and then plot the dendrogram
    linkage_matrix = np.column_stack(
        [children, distance, no_of_observations]).astype(float)

    # Plot the corresponding dendrogram
    dendrogram(linkage_matrix, orientation='right', leaf_font_size=14, **kwargs)

X = commit_matrix
model = AgglomerativeClustering(n_clusters=3)
model = model.fit(X)

plt.figure(figsize=(10, 80))
modules = commit_matrix.index.str.split("/").str[6]
plt.title('Hierarchical Clustering Dendrogram for\nco-changing source code files', size="20")
plot_dendrogram(model, labels=tuple(zip(X.index, modules)))
plt.savefig('test.png', bbox_inches='tight')
plt.show()
In [10]:
from sklearn.metrics.pairwise import cosine_distances
dissimilarity_matrix = cosine_distances(commit_matrix)
dissimilarity_matrix[:5,:5]
Out[10]:
In [27]:
from sklearn.decomposition import PCA
import pandas as pd
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(X)
principalDf = pd.DataFrame(
    data=principalComponents,
    columns=['principal component 1', 'principal component 2'])
finalDf = principalDf.copy()
finalDf['module'] = modules
finalDf.head()
Out[27]:
In [35]:
import numpy as np
from matplotlib import cm
import matplotlib.pyplot as plt
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Principal Component 1', fontsize = 15)
ax.set_ylabel('Principal Component 2', fontsize = 15)
ax.set_title('2 component PCA', fontsize = 20)
targets = finalDf['module'].unique()
colors = [x for x in cm.Spectral(np.linspace(0,1,len(targets)))]
for target, color in zip(targets, colors):
    indicesToKeep = finalDf['module'] == target
    ax.scatter(finalDf.loc[indicesToKeep, 'principal component 1'],
               finalDf.loc[indicesToKeep, 'principal component 2'],
               c=color,
               s=100,
               alpha=0.5)
ax.legend(targets)
ax.grid()
In [41]:
import numpy as np
import matplotlib.pyplot as plt
N = 1000
xTrue = np.linspace(0, 1000, N)
yTrue = 3 * xTrue
xData = xTrue + np.random.normal(0, 100, N)
yData = yTrue + np.random.normal(0, 100, N)
xData = np.reshape(xData, (N, 1))
yData = np.reshape(yData, (N, 1))
data = np.hstack((xData, yData))
data.shape
Out[41]:
In [39]:
mu = data.mean(axis=0)
data = data - mu
# data = (data - mu)/data.std(axis=0) # Uncommenting this reproduces mlab.PCA results
eigenvectors, eigenvalues, V = np.linalg.svd(data.T, full_matrices=False)
projected_data = np.dot(data, eigenvectors)
sigma = projected_data.std(axis=0).mean()
print(eigenvectors)
fig, ax = plt.subplots()
ax.scatter(xData, yData)
for axis in eigenvectors:
    start, end = mu, mu + sigma * axis
    ax.annotate(
        '', xy=end, xycoords='data',
        xytext=start, textcoords='data',
        arrowprops=dict(facecolor='red', width=2.0))
ax.set_aspect('equal')
plt.show()
In [47]:
xData = finalDf['principal component 1'].values
yData = finalDf['principal component 2'].values
data = np.stack((xData, yData)).T
data.shape
Out[47]:
In [48]:
mu = data.mean(axis=0)
data = data - mu
# data = (data - mu)/data.std(axis=0) # Uncommenting this reproduces mlab.PCA results
eigenvectors, eigenvalues, V = np.linalg.svd(data.T, full_matrices=False)
projected_data = np.dot(data, eigenvectors)
sigma = projected_data.std(axis=0).mean()
print(eigenvectors)
fig, ax = plt.subplots()
ax.scatter(xData, yData)
for axis in eigenvectors:
    start, end = mu, mu + sigma * axis
    ax.annotate(
        '', xy=end, xycoords='data',
        xytext=start, textcoords='data',
        arrowprops=dict(facecolor='red', width=2.0))
ax.set_aspect('equal')
plt.show()
In [ ]:
plot_data = pd.DataFrame(index=finalDf['module'])
plot_data['value'] = tuple(zip(
    finalDf['principal component 1'],
    finalDf['principal component 2']))
plot_data['label'] = commit_matrix.index
plot_data['data'] = plot_data[['label', 'value']].to_dict('records')
plot_dict = plot_data.groupby(plot_data.index).data.apply(list)
plot_dict
In [ ]:
import pygal
xy_chart = pygal.XY(stroke=False)
[xy_chart.add(entry[0], entry[1]) for entry in plot_dict.items()]
# uncomment to create the interactive chart
# xy_chart.render_in_browser()
xy_chart
To be able to better understand the result, we add the file names from the commit_matrix as row and column index to the dissimilarity_matrix.
In [ ]:
import pandas as pd
dissimilarity_df = pd.DataFrame(
    dissimilarity_matrix,
    index=commit_matrix.index,
    columns=commit_matrix.index)
dissimilarity_df.iloc[:5,:2]
Now we see the result in a better representation: for each file pair, we get the distance between their commit vectors. This means we now have a distance measure that tells us how dissimilarly two files were changed with respect to each other.
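As a small sketch of how this DataFrame could be used directly (not part of the original analysis): pick any file from the index and sort its column of the distance matrix to list the files that were changed most similarly to it.

# pick some file from the matrix; using the first index entry avoids hard-coding a path
some_file = dissimilarity_df.index[0]

# the smallest distances mark the files that were changed most similarly to this file
dissimilarity_df[some_file].sort_values().head()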
In [ ]:
%matplotlib inline
import seaborn as sns
sns.heatmap(
    dissimilarity_df,
    xticklabels=False,
    yticklabels=False);
Because of the alphabetically ordered filenames and the "feature-first" architecture of the software under investigation, we get a first glimpse of which changes within modules occur together and which do not.
To get an even better view, we can first extract the modules' names with a simple string operation and use these as the indexes.
In [ ]:
modules = dissimilarity_df.copy()
modules.index = modules.index.str.split("/").str[6]
modules.index.name = 'module'
modules.columns = modules.index
modules.iloc[25:30,25:30]
Then, we can create another heatmap that shows the names of the modules on both axes for further evaluation. For presentation reasons, we only look at a subset of the data.
In [ ]:
import matplotlib.pyplot as plt
plt.figure(figsize=[10,9])
sns.heatmap(modules.iloc[:180,:180]);
With this visualization, we get a first impression of how well our software architecture fits the real software development activities. In this case, I would say you can see quite clearly that the source code of the modules changed mostly within the module boundaries. But we also have to look at the changes that occur in other modules when a particular module is changed. These could be signs of unwanted dependencies and may point us to an architectural problem.
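One hedged way to quantify this impression (this aggregation is not part of the original analysis, just a sketch on top of the modules DataFrame from above) is to average the file-level distances per module pair. Low values off the diagonal then hint at module pairs whose files tend to change together:

# average the file-level distances per pair of modules (first over rows, then over columns)
module_distances = modules.groupby(level=0).mean().T.groupby(level=0).mean().T

# low off-diagonal values hint at modules whose files often change together
module_distances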
We can create another kind of visualization to check this in more detail.
Here, we can help ourselves with a technique called "multi-dimensional scaling", or "MDS" for short. With MDS, we can break down an n-dimensional space into a lower-dimensional representation. MDS tries to preserve the distance proportions of the higher-dimensional space when breaking it down to a lower-dimensional one.
In our case, we can let MDS figure out a 2D representation of our dissimilarity matrix (which is, overall, just a plain multi-dimensional vector space) to see which files get changed together. With this, we'll be able to see which files are changed together regardless of the modules they belong to.
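To illustrate the idea with a minimal, hedged sketch (the three points and their distances are made up and have nothing to do with our real data): given only the pairwise distances, MDS places the points in 2D so that the mutual distances are approximately preserved.

import numpy as np
from sklearn.manifold import MDS

# toy precomputed distance matrix: A and B are close, C is far away from both
toy_distances = np.array([
    [0.0, 0.1, 0.9],
    [0.1, 0.0, 0.9],
    [0.9, 0.9, 0.0]])

toy_2d = MDS(dissimilarity='precomputed', random_state=0).fit_transform(toy_distances)
toy_2d  # A and B end up next to each other, C far away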
The machine learning library scikit-learn gives us easy access to the algorithm that we need for this task as well. We just need to say that we have a precomputed dissimilarity matrix when initializing the algorithm and then pass our dissimilarity_df DataFrame to its fit_transform method.
In [ ]:
from sklearn.manifold import MDS
# uses a fixed seed for random_state for reproducibility
model = MDS(dissimilarity='precomputed', random_state=0)
dissimilarity_2d = model.fit_transform(dissimilarity_df)
dissimilarity_2d[:5]
The result is a 2D matrix that we can plot with matplotlib to get a first glimpse of the distribution of the calculated distances.
In [ ]:
plt.figure(figsize=(8,8))
x = dissimilarity_2d[:,0]
y = dissimilarity_2d[:,1]
plt.scatter(x, y);
With the plot above, we see that the 2D transformation somehow worked. But we can't yet see which data point belongs to which file or module. So we need to enrich the data a little bit more and search for a better, interactive visualization technique.
Let's add the filenames to the matrix as well as nice column names. We, again, add the information about the module of a source code file to the DataFrame.
In [ ]:
dissimilarity_2d_df = pd.DataFrame(
    dissimilarity_2d,
    index=commit_matrix.index,
    columns=["x", "y"])
dissimilarity_2d_df['module'] = dissimilarity_2d_df.index.str.split("/").str[6]
dissimilarity_2d_df.head()
OK, here comes the ugly part: we have to transform all the data into the format that our interactive visualization library pygal needs for its XY chart. We need to provide the data points in a specific dictionary-like data structure. But there is nothing that can hinder us in Python and pandas. So let's do this! We
- create a new DataFrame plot_data with the module names as index,
- zip the x and y coordinates into a tuple data structure as values, and
- use dissimilarity_2d_df's index as labels.
This gives us a new DataFrame with modules as index and, per module, a list of dictionary-like entries holding the label and value of each data point.
In [ ]:
plot_data = pd.DataFrame(index=dissimilarity_2d_df['module'])
plot_data['value'] = tuple(zip(dissimilarity_2d_df['x'], dissimilarity_2d_df['y']))
plot_data['label'] = dissimilarity_2d_df.index
plot_data['data'] = plot_data[['label', 'value']].to_dict('records')
plot_dict = plot_data.groupby(plot_data.index).data.apply(list)
plot_dict
With this nice little data structure, we can fill pygal's XY chart and create an interactive chart.
In [ ]:
import pygal
xy_chart = pygal.XY(stroke=False)
[xy_chart.add(entry[0], entry[1]) for entry in plot_dict.items()]
# uncomment to create the interactive chart
# xy_chart.render_in_browser()
xy_chart
This view is a pretty cool way of checking the real change behavior of your software, including an architectural perspective. If you hover over a data point, you see the complete data for that point.
Let's dive even deeper into the chart to see which insights we can gain from our result.
As already seen in the heatmap, all files of the "mail" module are very close together. This means that these files changed together very often.
In the XY chart, we can see this clearly when we hover over the "mail" entry in the legend on the upper left. The corresponding data points will be magnified a little bit.
Another interesting result can be found if we take a look at the distribution of the files of the "scheduling" module. Especially the data points in the lower region of the chart clearly indicate that these files were changed almost exclusively together.
In the XY chart, we can look at just the "scheduling" data points by deselecting all the other entries in the legend.
The last thing I want to show you in our example is the common change pattern for the files of the modules "comment", "framework" and "site". The files of these modules changed together very often, leading to a very mixed-colored region in the upper middle. In the case of our system under investigation, this is perfectly explainable: these three modules were developed at the beginning of the project. Due to many redesigns and refactorings, those files had to be changed all together. For these modules, it would make sense to look at only the recent development activities to find out whether the code within these modules is still co-changing.
In the XY chart, just select the modules "comment", "framework" and "site" to see the details.
We've seen how you can check the modularization of your software system by also taking a look at the development activities that are stored in the version control system. This gives you plenty of hints as to whether you've chosen a software architecture that also fits the commit behavior of your development teams.
But there is more that we could add here: you can not only check for modularization. You could also, e.g., take a look at the commits of your individual teams, spotting parts that are changed by too many teams. You could check whether the actions you've taken had any effect by looking only at the recent history of the commits. And you can redefine what co-changing means, e.g. as files that were changed on the same day, which would balance out the different commit styles of different developers.
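As a rough sketch of that last idea (assuming that the full output of log_numstat also contains a timestamp column; the column name 'timestamp' and the cut-off date are assumptions for illustration only):

import pandas as pd

# not part of the analysis above, just a sketch under the stated assumptions
full_log = log_numstat(GIT_REPO_DIR)
full_log['timestamp'] = pd.to_datetime(full_log['timestamp'])

# optionally restrict the analysis to the recent history
recent = full_log[full_log['timestamp'] > '2018-01-01'].copy()

# redefine "co-changing" as "changed on the same day" instead of "in the same commit"
recent['day'] = recent['timestamp'].dt.date
recent['hit'] = 1
daily_commit_matrix = recent.pivot_table(
    index='file', columns='day', values='hit', fill_value=0, aggfunc='max')

# from here on, the cosine distance / MDS steps above work exactly the same way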
But for now, we are leaving it here. You can experiment with further options on your own. You can find the complete Jupyter notebook on GitHub.